- Short: AGF V0.9 - n*8-bit Sample Pre-Packing Processor
- Author: olethros@geocities.com (Christos Dimitrakakis)
- Uploader: olethros@geocities.com (Christos Dimitrakakis)
- Type: util/pack
- Requires: 68020+ (fpu opt.)
-
- OVERVIEW
-
- AGF is a sample pre-processor. It transforms the sample data into a form with
- very little information content, which makes it much easier for compression
- programs to pack it down to a small size. AGF combined with GZIP gives an
- average compression of 50% and is always better than any other compression
- method used on its own. It is similar to ADPCM, but better :)
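-
- As a rough illustration of the principle (this is not AGF's actual filter),
- even a fixed first-difference pre-pass makes a smooth sample stream far more
- compressible; AGF replaces the fixed predictor with an adaptive one, so the
- residuals shrink further:

```python
import math
import zlib

def delta(data):
    # Replace each byte by its difference from the previous one.
    # The & 0xFF wrap-around keeps the transform exactly invertible.
    prev = 0
    out = bytearray()
    for b in data:
        out.append((b - prev) & 0xFF)
        prev = b
    return bytes(out)

# A smooth 8-bit "sample": slowly varying values compress poorly as-is,
# but their differences are tiny and highly repetitive.
raw = bytes(int(127 + 100 * math.sin(i / 20.0)) for i in range(4096))
print(len(zlib.compress(raw)), len(zlib.compress(delta(raw))))
```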
-
- HISTORY
-
- 06-09-1999 : Released a version that works :)
- 05-09-1999 : Released a version that works properly (more or less)
-
- SUMMARY
-
- AGF - Adaptive Gradient-descent FIR filter.
-
- This is a neural-network-like adaptive FIR filter with 32 neurons. The
- adaptation is deterministic, which means that the sample can be recovered
- from the processed file without needing to store the FIR coefficients in it
- as well. Adaptation is done on-line, on a sample-by-sample basis.
-
- USAGE
-
- AGF.fpu MODE sample processed_sample
- AGF.int MODE sample processed_sample
-
- The processed sample can then be efficiently packed with any kind of packer.
- I recommend xpk (xGZIP or xSQSH). lha/lzx will also do :)
- The results are always MUCH better than packing the raw sample directly.
-
- Modes:
- x : extract (decode) using a linear ANN
- c : compress (encode) using a linear ANN
- xd : extract (decode) using a static filter
- cd : compress (encode) using a static filter
-
- AGF.fpu and AGF.int implement the same algorithm using floating-point and
- fixed-point representations respectively. The first is compiled specifically
- for a 68060 with FPU, and the second for a plain 68060 (using the math libs
- for the few FPU instructions that remain). The integer version is twice as
- fast on my 68030+68882, and the difference in packing performance is
- negligible. I expect the int version to also be faster on 060 machines (lots
- of MULs), but maybe the .fpu version is faster on an 040.. test it..
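-
- For reference, the fixed-point trick can be sketched like this (the 16.16
- format here is an assumption; the readme does not say which format AGF.int
- actually uses):

```python
FRAC = 16                        # 16 fractional bits: Q16.16 (assumed)

def to_fix(x):
    # Convert a float to fixed point.
    return int(round(x * (1 << FRAC)))

def fix_mul(a, b):
    # One integer MUL plus a shift replaces a floating-point multiply.
    return (a * b) >> FRAC

print(fix_mul(to_fix(0.75), to_fix(2.5)) / (1 << FRAC))  # -> 1.875
```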
-
-
- OUTPUT
-
- While running, it outputs the average error of the ANN predictor, and when it
- finishes it shows the values of the ANN weights.. in case you are interested :)
-
-
- TODO
-
- Add an RBF layer before the 32-neuron layer.
- Make an xpksublib out of it.
- Add options for adjusting the number of coefficients and adaptation rate.
-
-
- BUGS
-
- Bug reports to olethros@geocities.com with "AGF BUG" as the subject line, please.
-
- SEE ALSO
-
- See dev/basic/gasp.lha for a similar pre-processor in which the adaptive
- process is controlled by a Genetic Algorithm.
-